Speech Intelligibility and Spatial Release From Masking Improvements Using Spatial Noise Reduction Algorithms in Bimodal Cochlear Implant Users.
This study investigated the speech intelligibility benefit of using two different spatial noise reduction algorithms in cochlear implant (CI) users who use a hearing aid (HA) on the contralateral side (bimodal CI users). The study controlled for head movements by using head-related impulse responses to simulate a realistic cafeteria scenario and controlled for HA and CI manufacturer differences by using the master hearing aid platform (MHA) to apply both hearing loss compensation and the noise reduction algorithms (beamformers). Ten bimodal CI users with moderate to severe hearing loss contralateral to their CI participated in the study, and data from nine listeners were included in the data analysis. The beamformers evaluated were the adaptive differential microphones (ADM), implemented independently on each side of the listener, and the binaurally implemented minimum variance distortionless response (MVDR) beamformer. For frontal speech and stationary noise from either the left or the right, an improvement (reduction) of the speech reception threshold of 5.4 dB and 5.5 dB was observed using the ADM, and 6.4 dB and 7.0 dB using the MVDR, respectively. As expected, no improvement was observed for either algorithm for colocated speech and noise. In a 20-talker babble noise scenario, the observed benefit was 3.5 dB for the ADM and 7.5 dB for the MVDR. The binaural MVDR algorithm outperformed the bilaterally applied monaural ADM. These results encourage the use of beamformer algorithms such as the ADM and MVDR by bimodal CI users in everyday life scenarios.
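The idea behind the ADM can be sketched as a delay-and-subtract pair of forward- and backward-facing cardioids with a single adaptive mixing weight. The geometry below (mic spacing equal to exactly one sample of acoustic travel time) and the block-wise closed-form adaptation are simplifying assumptions for illustration, not the MHA implementation used in the study.

```python
import numpy as np

def adm(x_front, x_rear):
    """First-order adaptive differential microphone (illustrative sketch).

    Assumes an endfire two-mic pair whose spacing corresponds to exactly
    one sample of acoustic travel time (hypothetical geometry).
    """
    # Forward- and backward-facing cardioids via delay-and-subtract
    c_f = x_front[1:] - x_rear[:-1]   # null towards the rear
    c_b = x_rear[1:] - x_front[:-1]   # null towards the front
    # One mixing weight, adapted in closed form over the whole block to
    # minimise output power; clipped so the null stays in the rear half-plane
    beta = np.dot(c_f, c_b) / (np.dot(c_b, c_b) + 1e-12)
    beta = float(np.clip(beta, 0.0, 1.0))
    return c_f - beta * c_b

# Plane waves simulated by a one-sample inter-microphone delay
rng = np.random.default_rng(0)
s = rng.standard_normal(16000)
out_front = adm(s[1:], s[:-1])   # source ahead: front mic leads
out_rear = adm(s[:-1], s[1:])    # source behind: rear mic leads
```

For the rear source the forward cardioid cancels the signal, so the output power collapses, while the frontal source passes (high-pass filtered by the differential array) - the directional behaviour that produces an SRT benefit for spatially separated noise.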
Coherent Coding of Enhanced Interaural Cues Improves Sound Localization in Noise With Bilateral Cochlear Implants
Bilateral cochlear implant (BCI) users have only very limited spatial hearing abilities. Speech coding strategies transmit interaural level differences (ILDs), but in a distorted manner, and interaural time difference (ITD) transmission is even more limited. With these cues, most BCI users can coarsely localize a single source in quiet, but performance quickly declines in the presence of other sounds. This proof-of-concept study presents a novel signal processing algorithm specific to BCIs, with the aim of improving sound localization in noise. The core part of the BCI algorithm duplicates a monophonic electrode pulse pattern and applies quasi-stationary natural or artificial ITDs or ILDs based on the estimated direction of the dominant source. Three experiments were conducted to evaluate different algorithm variants. Experiment 1 tested whether ITD transmission alone enables BCI subjects to lateralize speech; six out of nine BCI subjects were able to lateralize intelligible speech in quiet solely based on ITDs. Experiments 2 and 3 assessed azimuthal angle discrimination in noise with natural or modified ILDs and ITDs. Angle discrimination for frontal locations was possible with all variants, including the pure ITD case, but for lateral reference angles it was only possible with a linearized ILD mapping. Speech intelligibility in noise, limitations, and challenges of this interaural cue transmission approach are discussed, alongside suggestions for modifying and further improving the BCI algorithm.
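The core operation - duplicating a monophonic pulse pattern and imposing direction-dependent interaural cues - can be sketched as below. The sine-law ITD, the linear ILD slope, and the symmetric gain split are illustrative assumptions, not the natural or linearized mappings evaluated in the study.

```python
import numpy as np

def impose_interaural_cues(pulse_times_s, pulse_amps, azimuth_deg,
                           max_itd_s=700e-6, ild_db_per_deg=0.1):
    """Duplicate a monophonic CI pulse pattern for both ears and impose a
    quasi-stationary ITD/ILD for the estimated source azimuth.
    (Illustrative mappings; positive azimuth = source to the right.)
    """
    itd = max_itd_s * np.sin(np.radians(azimuth_deg))   # >0: right ear leads
    ild_db = ild_db_per_deg * azimuth_deg               # >0: right ear louder
    left = {"t": pulse_times_s + max(itd, 0.0),         # lagging ear is delayed
            "a": pulse_amps * 10 ** (-ild_db / 40)}     # ILD split symmetrically
    right = {"t": pulse_times_s + max(-itd, 0.0),
             "a": pulse_amps * 10 ** (+ild_db / 40)}
    return left, right

t = np.arange(0.0, 0.01, 1 / 900)       # 900-pps pulse train (example rate)
a = np.full_like(t, 0.5)                # constant pulse amplitudes
L, R = impose_interaural_cues(t, a, azimuth_deg=60)
```

Because the same pulse pattern is sent to both ears, the imposed ITD/ILD are coherent across the two implants, which is the property the algorithm relies on.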
Spatial Release From Masking in Simulated Cochlear Implant Users With and Without Access to Low-Frequency Acoustic Hearing
For normal-hearing listeners, speech intelligibility improves if speech and noise are spatially separated. While this spatial release from masking has been quantified in normal-hearing listeners in many studies, it is less clear how spatial release from masking changes in cochlear implant listeners with and without access to low-frequency acoustic hearing. Spatial release from masking depends on differences in access to speech cues due to hearing status and hearing device. To investigate the influence of these factors on speech intelligibility, the present study measured speech reception thresholds in spatially separated speech and noise for 10 different listener types. A vocoder was used to simulate cochlear implant processing, and low-frequency filtering was used to simulate residual low-frequency hearing. These forms of processing were combined to simulate cochlear implant listening, listening based on low-frequency residual hearing, and combinations thereof. Simulated cochlear implant users with additional low-frequency acoustic hearing showed better speech intelligibility in noise than simulated cochlear implant users without acoustic hearing and had access to more spatial speech cues (e.g., higher binaural squelch). Cochlear implant listener types showed higher spatial release from masking with bilateral access to low-frequency acoustic hearing than without. A binaural speech intelligibility model with normal binaural processing showed overall good agreement with measured speech reception thresholds, spatial release from masking, and spatial speech cues. This indicates that differences in the speech cues available to each listener type are sufficient to explain the changes in spatial release from masking across these simulated listener types.
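The two simulated listening modes can be approximated as below: an envelope (noise) vocoder standing in for CI processing, and a low-pass filter standing in for residual low-frequency acoustic hearing. The band count, band edges, envelope smoothing, and cutoff frequency are illustrative choices, not the vocoder parameters of the study.

```python
import numpy as np

def _bandpass_fft(x, fs, lo, hi):
    """Zero-phase brick-wall band-pass via the FFT (illustration only)."""
    X = np.fft.rfft(x)
    f = np.fft.rfftfreq(len(x), 1 / fs)
    X[(f < lo) | (f >= hi)] = 0.0
    return np.fft.irfft(X, n=len(x))

def noise_vocoder(x, fs, n_bands=8, f_lo=100.0, f_hi=7000.0, seed=0):
    """CI-simulation sketch: per-band envelopes re-imposed on noise carriers."""
    rng = np.random.default_rng(seed)
    edges = np.geomspace(f_lo, f_hi, n_bands + 1)
    win = np.ones(fs // 50) / (fs // 50)          # ~20 ms envelope smoother
    out = np.zeros(len(x))
    for lo, hi in zip(edges[:-1], edges[1:]):
        band = _bandpass_fft(x, fs, lo, hi)
        env = np.convolve(np.abs(band), win, mode="same")
        carrier = _bandpass_fft(rng.standard_normal(len(x)), fs, lo, hi)
        out += env * carrier                      # envelope on noise carrier
    return out

def residual_hearing(x, fs, cutoff=500.0):
    """Low-frequency residual acoustic hearing as a simple low-pass."""
    return _bandpass_fft(x, fs, 0.0, cutoff)

fs = 16000
speech_like = np.random.default_rng(1).standard_normal(fs)  # 1 s stand-in signal
ci_ear = noise_vocoder(speech_like, fs)
acoustic_ear = residual_hearing(speech_like, fs)
```

Feeding the vocoder to one ear and the low-passed signal to the other would approximate the bimodal listener types; applying the vocoder to both ears approximates bilateral CI listening.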
Evaluating Spatial Hearing Using a Dual-Task Approach in a Virtual-Acoustics Environment.
Spatial hearing is critical for communication in everyday sound-rich environments. It is important to gain an understanding of how well users of bilateral hearing devices function in these conditions. The purpose of this work was to evaluate a Virtual Acoustics (VA) version of the Spatial Speech in Noise (SSiN) test, the SSiN-VA. This implementation uses relatively inexpensive equipment and can be performed outside the clinic, allowing for regular monitoring of spatial-hearing performance. The SSiN-VA simultaneously assesses speech discrimination and relative localization with changing source locations in the presence of noise. The use of simultaneous tasks increases the cognitive load to better represent the difficulties faced by listeners in noisy real-world environments. Current clinical assessments may require costly equipment which has a large footprint. Consequently, spatial-hearing assessments may not be conducted at all. Additionally, as patients take greater control of their healthcare outcomes and a greater number of clinical appointments are conducted remotely, outcome measures that allow patients to carry out assessments at home are becoming more relevant. The SSiN-VA was implemented using the 3D Tune-In Toolkit, simulating seven loudspeaker locations spaced at 30° intervals with azimuths between -90° and +90°, and rendered for headphone playback using the binaural spatialization technique. Twelve normal-hearing participants were assessed to evaluate if SSiN-VA produced patterns of responses for relative localization and speech discrimination as a function of azimuth similar to those previously obtained using loudspeaker arrays. Additionally, the effect of the signal-to-noise ratio (SNR), the direction of the shift from target to reference, and the target phonetic contrast on performance were investigated. 
The SSiN-VA led to patterns of performance as a function of spatial location similar to those obtained with loudspeaker setups for both relative localization and speech discrimination. Performance for relative localization was significantly better at the highest SNR than at the lowest SNR tested, and a target shift to the right was associated with an increased likelihood of a correct response. For word discrimination, there was an interaction between SNR and word group. Overall, these outcomes support the use of virtual audio for speech discrimination and relative localization testing in noise.
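The dual-task trial structure can be illustrated as follows. This is a schematic reconstruction of the task logic (a reference location, a one-position shift left or right, and a joint word/direction response), not the published SSiN implementation.

```python
import random

SPEAKERS_DEG = list(range(-90, 91, 30))   # seven virtual loudspeakers, 30 deg apart

def make_ssin_trial(words, rng=random):
    """One schematic dual-task trial: the listener must report BOTH the
    target word and whether it moved left or right of the reference."""
    ref = rng.randrange(len(SPEAKERS_DEG))
    if ref == 0:                           # keep the shift inside the array
        step = 1
    elif ref == len(SPEAKERS_DEG) - 1:
        step = -1
    else:
        step = rng.choice([-1, 1])
    return {"word": rng.choice(words),
            "reference_deg": SPEAKERS_DEG[ref],
            "target_deg": SPEAKERS_DEG[ref + step],
            "shift": "right" if step > 0 else "left"}

def score(trial, word_response, shift_response):
    """Score the two tasks independently, as in a dual-task analysis."""
    return {"word_correct": word_response == trial["word"],
            "localization_correct": shift_response == trial["shift"]}

trial = make_ssin_trial(["bat", "mat", "pat"], random.Random(0))
```

Scoring the two tasks separately per spatial location is what allows speech discrimination and relative localization to be analysed as a function of azimuth from the same trials.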
SSiN-VA outcomes for twelve normal-hearing participants
Spatial hearing is critical for communication in everyday multi-talker, sound-rich environments. To gain an understanding of how well users of bilateral hearing devices function in complex sound environments, and to be able to regularly monitor their performance outside the clinic, we have implemented a Virtual Acoustics (VA) version of the Spatial Speech in Noise (SSiN) test (1), named the SSiN-VA.
The SSiN-VA allows for simultaneous assessment of speech discrimination and relative localisation with changing source locations in the presence of noise. The use of this dual-task paradigm increases the cognitive load to better represent the difficulties faced by listeners in noisy real-world environments.
For many current speech assessments, patients need to visit a clinic and undergo testing using a multi-loudspeaker array. This is time-consuming for the patient and clinician, and the equipment is costly and has a large footprint, taking up vital clinical space. In practice, this often means that spatial hearing assessments are not conducted at all. As we move towards a clinical model where patients take greater control of their healthcare outcomes and a greater number of clinical appointments are conducted remotely, outcome measures that allow patients to carry out assessments at home are becoming more relevant.
The SSiN-VA was implemented using the 3D Tune-In Toolkit (2) to simulate seven loudspeaker locations, spaced at 30° intervals with azimuths between –90° and +90°, and rendered for headphone playback using binaural spatialisation. Twelve normal-hearing participants were assessed to evaluate whether the virtual implementation of the test produced results similar to those obtained using a loudspeaker array. They were tested at three different, individually selected speech-to-noise ratios (SNRs).
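A crude stand-in for the headphone rendering is sketched below, using broadband ITD/ILD panning in place of the Toolkit's HRTF-based binaural spatialisation; the cue magnitudes (700 µs maximum ITD, 6 dB maximum ILD) are illustrative assumptions.

```python
import numpy as np

FS = 44100
AZIMUTHS_DEG = np.arange(-90, 91, 30)     # the seven virtual loudspeakers

def crude_binaural_render(mono, azimuth_deg, fs=FS, max_itd_s=700e-6):
    """Crude headphone rendering of one virtual loudspeaker from broadband
    ITD/ILD alone - a stand-in for HRTF-based binaural spatialisation,
    with illustrative (assumed) cue magnitudes."""
    itd = max_itd_s * np.sin(np.radians(azimuth_deg))    # >0: right ear leads
    lag = int(round(abs(itd) * fs))                      # ITD as whole samples
    delayed = np.concatenate([np.zeros(lag), mono])[:len(mono)]
    left, right = (delayed, mono) if itd > 0 else (mono, delayed)
    ild_db = 6.0 * np.sin(np.radians(azimuth_deg))       # assumed broadband ILD
    return np.stack([left * 10 ** (-ild_db / 40),
                     right * 10 ** (+ild_db / 40)])

mono = np.random.default_rng(2).standard_normal(FS // 10)   # 100 ms of noise
rendered = {az: crude_binaural_render(mono, az) for az in AZIMUTHS_DEG}
```

Unlike this sketch, HRTF-based rendering also reproduces the frequency-dependent spectral cues that support front/back and elevation perception, which is why the SSiN-VA uses the Toolkit rather than plain ITD/ILD panning.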
1. Bizley JK, Elliott N, Wood KC, Vickers DA. Simultaneous assessment of speech identification and spatial discrimination: A potential testing approach for bilateral cochlear implant users? Trends in Hearing. 2015 Dec 1;19:2331216515619573.
2. Cuevas-Rodríguez M, Picinali L, González-Toledo D, Garre C, de la Rubia-Cuestas E, Molina-Tanco L, et al. 3D Tune-In Toolkit: An open-source library for real-time binaural spatialisation. PLoS ONE. 2019;14(3):1–37.

MSC was funded by Imperial Confidence in Concept, Imperial Biomedical Research Centre (BRC). DAV and MSC were funded by a Programme Grant for Applied Research (NIHR201608). The views expressed are those of the author(s) and not necessarily those of the NIHR or the Department of Health and Social Care. MSC, BW, and DAV were funded by the Medical Research Council (MRC) UK, grant code MR/S002537/1.
A model framework for simulating spatial hearing of bilateral cochlear implant users
Bilateral cochlear implants (CIs) greatly improve spatial hearing acuity for CI users, but substantial gaps still exist compared to normal-hearing listeners. For example, CI users have poorer localization skills, little or no binaural unmasking, and reduced spatial release from masking. Multiple factors have been identified that limit binaural hearing with CIs. These include degradation of cues due to the various sound processing stages, the viability of the electrode-neuron interface, impaired brainstem neurons, and deterioration in connectivity between different cortical layers. To help quantify the relative importance and inter-relationship of these factors, computer models can, and arguably should, be employed. While models exploring single stages are often in good agreement with selected experimental data, their combination often does not yield a comprehensive and accurate simulation of perception. Here, we combine information from CI sound processing with computational auditory model stages in a modular and open-source framework, resembling an artificial bilateral CI user. The main stages are (a) binaural signal generation with optional head-related impulse response filtering, (b) generic CI sound processing not restricted to a specific manufacturer, (c) electrode-to-neuron transmission, (d) binaural interaction, and (e) a decision model. The function and outputs of the different model stages are demonstrated with examples of localization experiments; however, the model framework is not tailored to a specific dataset. It offers a selection of sound coding strategies and allows for third-party model extensions or substitutions; thus, it is possible to employ the model for a wide range of binaural applications and even for educational purposes.
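The modular stage structure described above, from (a) binaural signal generation through (e) decision, can be expressed as a chain of interchangeable callables. The toy stages below only illustrate the plumbing that makes third-party substitutions possible; they are not the actual auditory models in the framework.

```python
from typing import Callable, List

# Toy placeholder stages; each is independently swappable, mirroring the
# framework's modular design (real stages would be full auditory/CI models).
def binaural_generation(stimulus):           # (a) stand-in for HRIR filtering:
    return {"left": stimulus,                #     a flat +6 dB to the right ear
            "right": [2 * v for v in stimulus]}

def ci_processing(signals):                  # (b) generic sound coding: here
    return {ear: [abs(v) for v in s]         #     just envelope extraction
            for ear, s in signals.items()}

def electrode_to_neuron(signals):            # (c) transmission model (identity)
    return signals

def binaural_interaction(signals):           # (d) crude ILD-like cue
    return sum(signals["right"]) - sum(signals["left"])

def decision(cue):                           # (e) map the cue to a response
    return "right" if cue > 0 else "left"

def run_framework(stimulus, stages: List[Callable]):
    """Pipe the stimulus through the stage chain, one stage at a time."""
    out = stimulus
    for stage in stages:
        out = stage(out)
    return out

stages = [binaural_generation, ci_processing, electrode_to_neuron,
          binaural_interaction, decision]
response = run_framework([0.1, -0.2, 0.3], stages)
```

Because every stage only agrees on its input/output contract, any single stage (e.g., a specific sound coding strategy) can be replaced without touching the rest of the chain - the property that makes such a framework usable for comparative and educational experiments.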